
    Lossy Compression applied to the Worst Case Execution Time Problem

    Abstract Interpretation and Symbolic Model Checking are powerful techniques in the field of testing. These techniques can verify the correctness of systems by exploring the state space that the systems occupy. As this would normally be intractable for even moderately complicated systems, both techniques employ approximations in order to reduce the size of the state space considered without compromising the reliability of the results. When applied to Real-time Systems, and in particular Worst Case Execution Time estimation, Abstract Interpretation and Symbolic Model Checking are primarily used to verify the temporal properties of a system. This results in a large number of applications for the techniques, from verifying the properties of components to bounding the values that given variables may take. In turn, this results in a large problem area for researchers in devising the approximations required to reduce the size of the state space whilst ensuring the analysis remains safe. This thesis examines the use of Abstract Interpretation and Symbolic Model Checking, focusing in particular on the methods used to create approximations. To this end, this thesis introduces the ideas of Information Theory and Lossy Compression. Information Theory gives a structured framework for quantifying, or valuing, information. In other domains, Lossy Compression utilises this framework to achieve reasonably accurate approximations. However, unlike Abstract Interpretation or Symbolic Model Checking, Lossy Compression provides ideas on how one can find information to remove with minimal consequences. Having introduced lossy compression applications, this thesis presents a generic approach to applying lossy compression to problems encountered in Worst Case Execution Time estimation. To test that the generic approach works, two distinct problems in Worst Case Execution Time estimation are considered. The first is providing a Must/May analysis for the PLRU cache; whilst common in usage, the logical complexity of a PLRU cache renders it difficult to analyse. The second is loop bound analysis, with a particular focus on removing the need for information supplied by annotations, due to the inherent unverifiability of annotations.
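
    As context for the PLRU discussion above, the sketch below implements a generic 4-way tree-PLRU cache set in Python (a textbook formulation, not taken from the thesis; the way numbering and bit convention are assumptions). The victim choice depends on the whole history of the tree bits rather than on simple recency, which is precisely what makes a Must/May analysis of PLRU hard.

        class PLRUSet:
            """A 4-way tree-PLRU set: three tree bits, each pointing
            toward the half of the tree holding the next victim."""

            def __init__(self):
                self.lines = [None] * 4   # cached block per way
                self.bits = [0, 0, 0]     # [root, left pair, right pair]

            def _touch(self, way):
                # Point every bit on the path to `way` away from it.
                if way < 2:
                    self.bits[0] = 1          # next victim on the right
                    self.bits[1] = 1 - way
                else:
                    self.bits[0] = 0          # next victim on the left
                    self.bits[2] = 3 - way

            def _victim(self):
                # Follow the tree bits to the pseudo-LRU way.
                if self.bits[0] == 0:
                    return 1 if self.bits[1] else 0
                return 3 if self.bits[2] else 2

            def access(self, block):
                """Returns True on a hit, False on a miss (with eviction)."""
                if block in self.lines:
                    self._touch(self.lines.index(block))
                    return True
                way = self._victim()
                self.lines[way] = block
                self._touch(way)
                return False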

    Benefits of Using a Mars Forward Strategy for Lunar Surface Systems

    This paper identifies potential risk reduction, cost savings, and programmatic procurement benefits of a Mars Forward Lunar Surface System architecture that provides commonality or evolutionary development paths for lunar surface system elements applicable to Mars surface systems. The objective of this paper is to identify the potential benefits of incorporating a Mars Forward development strategy into the planned Project Constellation Lunar Surface System Architecture. The benefits include cost savings, technology readiness, and design validation of systems that would be applicable to lunar and Mars surface systems. The paper presents a survey of previous lunar and Mars surface systems design concepts and provides an assessment of previous conclusions concerning those systems in light of the current Project Constellation Exploration Architectures. The operational requirements for current Project Constellation lunar and Mars surface system elements are compared and evaluated to identify potential risk reduction strategies that build on lunar surface systems to reduce the technical and programmatic risks of Mars exploration. Risk reduction for rapidly evolving technologies is achieved through systematic evolution of technologies and components based on Moore's Law, superimposed on the typical NASA systems engineering project development "V-cycle" described in NASA NPR 7120.5. Risk reduction for established or slowly evolving technologies is achieved through a process called the Mars-Ready Platform strategy, in which incremental improvements lead from the initial lunar surface system components to Mars-Ready technologies. The potential programmatic benefits of the Mars Forward strategy are described in terms of the transition from the lunar exploration campaign to the Mars exploration campaign. By utilizing a sequential combined procurement strategy for lunar and Mars exploration surface systems, the overall budget wedges for exploration systems are reduced and the costly technological development gap between the lunar and Mars programs can be eliminated. This provides a sustained level of technological competitiveness, as well as a stable engineering and manufacturing capability, throughout the entire duration of Project Constellation.

    A Dual Launch Robotic and Human Lunar Mission Architecture

    This paper describes a comprehensive lunar exploration architecture developed by Marshall Space Flight Center's Advanced Concepts Office that features a science-based surface exploration strategy and a transportation architecture that uses two launches of a heavy lift launch vehicle to deliver human and robotic mission systems to the moon. The principal advantage of the dual launch lunar mission strategy is the reduced cost and risk resulting from the development of just one launch vehicle system. The dual launch lunar mission architecture may also enhance opportunities for commercial and international partnerships by using expendable launch vehicle services for robotic missions or development of surface exploration elements. Furthermore, this architecture is particularly suited to the integration of robotic and human exploration to maximize science return. For surface operations, an innovative dual-mode rover is presented that is capable of performing robotic science exploration as well as transporting human crew conducting surface exploration. The dual-mode rover can be deployed to the lunar surface to perform precursor science activities, collect samples, scout potential crew landing sites, and meet the crew at a designated landing site. With this approach, the crew is able to evaluate the robotically collected samples to select the best samples for return to Earth to maximize the scientific value. The rovers can continue robotic exploration after the crew leaves the lunar surface. The transportation system for the dual launch mission architecture uses a lunar-orbit-rendezvous strategy. Two heavy lift launch vehicles depart from Earth within a six-hour period to transport the lunar lander and crew elements separately to lunar orbit. In lunar orbit, the crew transfer vehicle docks with the lander and the crew boards the lander for descent to the surface. After the surface mission, the crew returns to the orbiting transfer vehicle for the return to Earth. This paper describes the complete transportation architecture, including analysis of transportation element options and sensitivities: transportation element mass to surface landed mass, lander propellant options, and mission crew size. Based on this analysis, initial design concepts for the launch vehicle, crew module, and lunar lander are presented. The paper also describes how the dual launch lunar mission architecture would fit into a more general overarching human space exploration philosophy that would allow expanded application of mission transportation elements for missions beyond the Earth-moon realm.

    Generating Utilization Vectors for the Systematic Evaluation of Schedulability Tests

    This paper introduces the Dirichlet-Rescale (DRS) algorithm. The DRS algorithm provides an efficient general-purpose method of generating n-dimensional vectors of components (e.g. task utilizations), where the components sum to a specified total, each component conforms to individual constraints on the maximum and minimum values that it can take, and the vectors are uniformly distributed over the valid region of the domain of all possible vectors, bounded by the constraints. The DRS algorithm can be used to improve the nuance and quality of empirical studies into the effectiveness of schedulability tests for real-time systems; potentially making them more realistic, and leading to new conclusions. It is efficient enough for use in large-scale studies where millions of task sets need to be generated. Further, the constraints on individual task utilizations can be used for fine-grained control of task set parameters enabling more detailed exploration of schedulability test behavior. Finally, the real power of the algorithm lies in the fact that it can be applied recursively, with one vector acting as a set of constraints for the next. This is particularly useful in task set generation for mixed criticality systems and multi-core systems, where task utilizations are either multi-valued or can be decomposed into multiple constituent parts.
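
    The specification in the abstract can be made concrete with a naive baseline: rejection sampling from a flat Dirichlet distribution is uniform over the constrained region, but degenerates when the constraints are tight, which is the inefficiency DRS is designed to avoid. The sketch below (Python with NumPy; not the DRS algorithm itself) illustrates the problem being solved.

        import numpy as np

        def rejection_sample(n, total, lower, upper, rng=None):
            """Draw one n-component vector that sums to `total`, uniformly
            distributed over the region where lower[i] <= x[i] <= upper[i].
            Correct but potentially very slow for tight constraints; DRS
            rescales rather than rejects to stay efficient."""
            if rng is None:
                rng = np.random.default_rng()
            assert sum(lower) <= total <= sum(upper), "infeasible constraints"
            while True:
                # A flat Dirichlet scaled by `total` is uniform on the simplex.
                x = rng.dirichlet(np.ones(n)) * total
                if np.all(x >= lower) and np.all(x <= upper):
                    return x

        # e.g. utilizations of 4 tasks summing to 0.9, each in [0.05, 0.5]
        u = rejection_sample(4, 0.9, [0.05] * 4, [0.5] * 4)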

    Functional Uncertainty in Real-Time Safety-Critical Systems

    Safety-critical cyber-physical systems increasingly use components that are unable to provide deterministic guarantees of the correctness of their functional outputs; rather, they characterize each outcome of a computation with an associated uncertainty regarding its correctness. The problem of assuring correctness in such systems is considered. A model is proposed in which components are characterized by bounds on the degree of uncertainty under both worst-case and typical circumstances; the objective is to assure safety under all circumstances while optimizing performance for typical circumstances. The problem of selecting components for execution in order to obtain a result of a certain minimum uncertainty as soon as possible, while guaranteeing to do so within a specified deadline, is considered. An optimal semi-adaptive algorithm for solving this problem is derived. The scalability of this algorithm is investigated via simulation experiments comparing the semi-adaptive scheme with a purely static approach.
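
    To make the selection problem concrete, here is a deliberately simple brute-force version in Python. It assumes a toy model in which each component has a WCET and a worst-case probability that its output is wrong, and in which independent components' uncertainties multiply; both assumptions are illustrative simplifications, and the paper's semi-adaptive algorithm is far more efficient than exhaustive search.

        from itertools import combinations

        def select_components(comps, target, deadline):
            """comps: list of (wcet, worst_case_uncertainty) pairs.
            Returns the fastest subset whose combined uncertainty meets
            `target` within `deadline`, or None if no subset qualifies."""
            best = None
            for r in range(1, len(comps) + 1):
                for subset in combinations(comps, r):
                    time = sum(c for c, _ in subset)
                    uncertainty = 1.0
                    for _, u in subset:
                        uncertainty *= u   # independence assumed (toy model)
                    if uncertainty <= target and time <= deadline:
                        if best is None or time < best[0]:
                            best = (time, subset)
            return best

        # e.g. three detectors, each with 10% worst-case error probability
        print(select_components([(3, 0.1), (5, 0.1), (4, 0.1)], 0.01, 8))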

    On the analysis of random replacement caches using static probabilistic timing methods for multi-path programs

    Probabilistic hard real-time systems, based on hardware architectures that use a random replacement cache, provide a potential means of reducing the hardware over-provisioning required to accommodate pathological scenarios and the associated extremely rare, but excessively long, worst-case execution times that can occur in deterministic systems. Timing analysis for probabilistic hard real-time systems requires the provision of probabilistic worst-case execution time (pWCET) estimates. The pWCET distribution can be described as an exceedance function which gives an upper bound on the probability that the execution time of a task will exceed any given execution time budget on any particular run. This paper introduces a more effective static probabilistic timing analysis (SPTA) for multi-path programs. The analysis estimates the temporal contribution of an evict-on-miss, random replacement cache to the pWCET distribution of multi-path programs. The analysis uses a conservative join function that provides a proper over-approximation of the possible cache contents and the pWCET distribution on path convergence, irrespective of the actual path followed during execution. Simple program transformations are introduced that reduce the impact of path indeterminism while ensuring sound pWCET estimates. Evaluation shows that the proposed method is efficient at capturing locality in the cache, and substantially outperforms the only prior approach to SPTA for multi-path programs, which is based on path merging. The evaluation results show incomparability with analysis for an equivalent deterministic system using an LRU cache. For some benchmarks the performance of LRU is better, while for others, the new analysis techniques show that random replacement has provably better performance.
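
    The final step of any SPTA, forming the pWCET exceedance function from per-access hit probabilities, can be sketched independently of how those probabilities are derived (deriving sound lower bounds on them is the hard part this paper addresses). Taking the hit and miss latencies and per-access hit probabilities as given inputs, a basic convolution in Python:

        def exceedance(hit_probs, hit_lat=1, miss_lat=10):
            """Convolve per-access latency distributions into an execution
            time PMF, then return the exceedance function 1 - CDF, i.e. a
            bound on P(execution time > t) for each t. Treating accesses
            as independent is only sound if `hit_probs` are suitably
            derived lower bounds, as in SPTA."""
            pmf = {0: 1.0}
            for p in hit_probs:
                nxt = {}
                for t, q in pmf.items():
                    nxt[t + hit_lat] = nxt.get(t + hit_lat, 0.0) + q * p
                    nxt[t + miss_lat] = nxt.get(t + miss_lat, 0.0) + q * (1 - p)
                pmf = nxt
            exc, tail = {}, 1.0
            for t in sorted(pmf):
                tail -= pmf[t]
                exc[t] = max(tail, 0.0)   # guard against rounding below zero
            return exc

        # e.g. three accesses with hit probability lower bounds 0.9, 0.75, 0.5
        print(exceedance([0.9, 0.75, 0.5]))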

    Forecast-Based Interference: Modelling Multicore Interference from Observable Factors

    While there is significant interest in the use of COTS multicore platforms for Real-time Systems, there have been few practical methods to calculate the interference multiplier (i.e. the increase in execution time due to interference) between tasks on such systems. COTS multicore platforms present two distinct challenges: first, the variable interference between tasks competing for shared resources such as cache; and second, the complexity of the hardware mechanisms and policies used, which may result in a system that is very difficult if not impossible to analyse; assuming that the exact details of the hardware are even disclosed! This paper proposes a new technique, Forecast-Based Interference analysis, which mitigates both of these issues by combining measurement-based techniques with statistical techniques and forecast modelling to enable the prediction of an interference multiplier for a given set of tasks, in an automated and reliable manner. The combination of execution times and interference multipliers can be used both in design, e.g. for specifying timing watchdogs, and in analysis, e.g. for verifying schedulability.
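
    As a minimal illustration of the measurement-plus-modelling idea (not the paper's forecast models, and the observable factor here is a placeholder chosen for the example), one can fit a simple regression from an observable factor to the measured interference multiplier:

        import numpy as np

        def fit_multiplier_model(factor, multiplier):
            """Least-squares line mapping an observable factor (e.g. the
            co-runners' measured cache-miss rate) to the interference
            multiplier, i.e. time_contended / time_isolated."""
            a, b = np.polyfit(factor, multiplier, 1)
            return lambda x: a * x + b

        # usage: predict = fit_multiplier_model(observed_factors, multipliers)
        #        budget = wcet_isolated * predict(expected_factor)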

    AirTight: A Resilient Wireless Communication Protocol for Mixed-Criticality Systems

    This paper describes the motivation, design, analysis and implementation of a new protocol for critical wireless communication called AirTight. Wireless communication has become a crucial part of the infrastructure of many cyber-physical applications. Many of these applications are real-time and also mixed-criticality, in that they have components/subsystems with different consequences of failure. Wireless communication is inevitably subject to levels of external interference. In this paper we represent this interference using a criticality-aware fault model; for each level of interference in the fault model we guarantee the timing behaviour of the protocol (i.e. we guarantee that packet deadlines are satisfied for certain levels of criticality). Although a new protocol, AirTight is built upon existing standards such as IEEE 802.15.4. A prototype implementation and protocol-accurate simulator, which are also built upon existing technologies, demonstrate the effectiveness and functionality of the protocol.
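
    The flavour of a criticality-aware timing guarantee can be shown with a toy slot-counting bound (a simplification for illustration, not AirTight's actual analysis): if a packet needs a number of slots to transmit, the fault model at a given criticality level permits at most a bounded number of corrupted slots, and the flow owns one slot in every fixed-length cycle, then a worst-case response time follows directly.

        def packet_response_slots(tx_slots, max_faults, cycle_len):
            """Toy worst-case response time, in slots, for a TDMA-style
            flow: `tx_slots` successful transmissions are needed, up to
            `max_faults` of the flow's slots may be corrupted at this
            criticality level, and the flow owns one slot per `cycle_len`
            slots. Every fault costs one extra allocated slot."""
            return (tx_slots + max_faults) * cycle_len

        # e.g. a 2-slot packet, a fault model allowing 3 corrupted slots,
        # and one allocated slot in every 10: bound = 50 slots
        assert packet_response_slots(2, 3, 10) == 50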

    A Framework for Multi-core Schedulability Analysis Accounting for Resource Stress and Sensitivity

    Timing verification of multi-core systems is complicated by contention for shared hardware resources between co-running tasks on different cores. This paper introduces the Multi-core Resource Stress and Sensitivity (MRSS) task model that characterizes how much stress each task places on resources and how sensitive it is to such resource stress. This model facilitates a separation of concerns, thus retaining the advantages of the traditional two-step approach to timing verification (i.e. timing analysis followed by schedulability analysis). Response time analysis is derived for the MRSS task model, providing efficient context-dependent and context-independent schedulability tests for both fixed priority preemptive and fixed priority non-preemptive scheduling. Dominance relations are derived between the tests, along with complexity results and proofs of optimal priority assignment policies. The MRSS task model is underpinned by a proof-of-concept industrial case study. The problem of task allocation is considered in the context of the MRSS task model, with Simulated Annealing shown to provide an effective solution.
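
    A generic fixed point in this two-step style can be sketched as follows; the MRSS interference bound itself is replaced here by a placeholder term (sensitivity times co-runner stress), which is a simplification rather than the paper's actual analysis.

        import math

        def response_time(i, C, T, sens, stress_co):
            """Response time of task i under fixed-priority preemptive
            scheduling, with tasks 0..i-1 at higher priority and C[i]
            inflated by sens[i] * stress_co[i] as a stand-in for the MRSS
            resource-stress term. Returns None if R exceeds T[i]
            (deadlines assumed equal to periods)."""
            Ci = C[i] + sens[i] * stress_co[i]
            R = Ci
            while True:
                nxt = Ci + sum(math.ceil(R / T[j]) * C[j] for j in range(i))
                if nxt == R:
                    return R
                if nxt > T[i]:
                    return None
                R = nxt

        # e.g. two higher-priority tasks; task 2 is sensitive to co-runner stress
        print(response_time(2, [1, 2, 3], [5, 10, 20], [0, 0, 0.5], [0, 0, 2]))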